A pattern for building personal knowledge bases using LLMs. Extended with lessons from building agentmemory, a persistent memory engine for AI coding agents.
This builds on Andrej Karpathy's original LLM Wiki idea file. Everything in the original still applies. This document adds what we learned running the pattern in production: what breaks at scale, what's missing, and what separates a wiki that stays useful from one that rots.
The core insight is correct: stop re-deriving, start compiling. RAG retrieves and forgets. A wiki accumulates and compounds. The three-layer architecture (raw sources, wiki, schema) works. The operations (ingest, query, lint) cover the basics. If you haven't read the original, start there.
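To make the shape concrete before getting into what we changed, here is a minimal sketch of those three layers and three operations as one agent might wire them up. The directory layout, the prompts, and the `llm()` helper are illustrative assumptions, not part of the original idea file; your agent will choose its own.

```python
# A minimal sketch of the three layers and three operations, assuming a flat
# Markdown layout. Directory names and the llm() helper are placeholders.
from pathlib import Path

SOURCES = Path("sources")    # layer 1: raw documents, never edited
WIKI = Path("wiki")          # layer 2: compiled articles, one topic per file
SCHEMA = Path("schema.md")   # layer 3: conventions the articles must follow

def llm(prompt: str) -> str:
    """Placeholder for whatever model call your agent wires in."""
    raise NotImplementedError

def read_wiki() -> str:
    """Concatenate the compiled articles for use in prompts."""
    return "\n\n".join(p.read_text() for p in sorted(WIKI.glob("*.md")))

def ingest(source: Path) -> str:
    """Fold one raw source into the wiki instead of just indexing it."""
    return llm(
        f"Schema:\n{SCHEMA.read_text()}\n\n"
        f"Existing articles:\n{read_wiki()}\n\n"
        f"New source:\n{source.read_text()}\n\n"
        "Rewrite the articles that should change so they absorb this source."
    )

def query(question: str) -> str:
    """Answer from the compiled wiki, not from the raw sources."""
    return llm(f"Using only these articles:\n{read_wiki()}\n\nAnswer: {question}")

def lint() -> str:
    """Ask for schema violations: contradictions, stale facts, orphan topics."""
    return llm(f"Schema:\n{SCHEMA.read_text()}\n\nCheck these articles:\n{read_wiki()}")
```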
A pattern for building personal knowledge bases using LLMs.
This is an idea file: it is designed to be copy-pasted to your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.
Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and it generates an answer. This works, but the LLM rediscovers the knowledge from scratch on every question; nothing accumulates. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every single time. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
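For contrast, the loop described above reduces to something like the sketch below. The `embed()`, `similarity()`, and `llm()` stubs stand in for whatever embedding model and LLM you use; the point is structural, not the specific stack.

```python
# The retrieve-then-generate loop described above, reduced to a sketch.
# embed(), similarity(), and llm() are placeholder stubs.

def embed(text: str) -> list[float]: ...   # placeholder
def similarity(a, b) -> float: ...          # placeholder
def llm(prompt: str) -> str: ...            # placeholder

def rag_answer(question: str, chunks: list[str], k: int = 5) -> str:
    q = embed(question)
    top = sorted(chunks, key=lambda c: similarity(q, embed(c)), reverse=True)[:k]
    # Whatever synthesis the model does here is discarded after the call;
    # the next question re-derives it from the same chunks.
    return llm(f"Context:\n" + "\n\n".join(top) + f"\n\nQuestion: {question}")
```

Nothing is written back anywhere, which is exactly the gap the wiki pattern fills.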